
    Direction of arrival estimation using robust complex Lasso

    The Lasso (Least Absolute Shrinkage and Selection Operator) is a popular technique for simultaneous linear regression estimation and variable selection. In this paper, we propose a novel approach to robust Lasso that follows the spirit of M-estimation. We define MM-Lasso estimates of regression and scale as solutions to generalized zero subgradient equations. Another distinctive feature of this paper is that we consider complex-valued measurements and regression parameters, which requires careful mathematical characterization of the problem. We propose an explicit and efficient algorithm for computing the MM-Lasso solution whose computational complexity is comparable to that of state-of-the-art algorithms for computing the Lasso solution. The usefulness of the MM-Lasso method is illustrated for direction-of-arrival (DoA) estimation with sensor arrays in the single-snapshot case.
    Comment: Appeared in the Proceedings of the 10th European Conference on Antennas and Propagation (EuCAP'2016), Davos, Switzerland, April 10-15, 2016.
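    As background for the robust variant above, the plain complex-valued Lasso can be solved by cyclic coordinate descent with a complex soft-thresholding operator that shrinks the modulus while preserving the phase. The sketch below shows this non-robust baseline (the function names and fixed iteration count are illustrative; this is not the paper's MM-Lasso algorithm):

```python
import numpy as np

def soft_threshold_complex(z, t):
    """Complex soft-thresholding: shrink the modulus by t, keep the phase."""
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * z, 0)

def complex_lasso_cd(X, y, lam, n_iter=200):
    """Plain (non-robust) complex Lasso via cyclic coordinate descent.
    A sketch of the baseline that MM-Lasso robustifies."""
    n, p = X.shape
    beta = np.zeros(p, dtype=complex)
    col_norms = np.sum(np.abs(X) ** 2, axis=0)
    r = y - X @ beta
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]            # add coordinate j back to residual
            zj = np.vdot(X[:, j], r)          # conjugate inner product x_j^H r
            beta[j] = soft_threshold_complex(zj, lam) / col_norms[j]
            r -= X[:, j] * beta[j]            # remove updated coordinate
    return beta
```

    With orthonormal regressor columns a single pass is exact; in general the loop runs until the coefficient change is small.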

    Multichannel sparse recovery of complex-valued signals using Huber's criterion

    In this paper, we generalize Huber's criterion to the multichannel sparse recovery problem with complex-valued measurements, where the objective is to recover jointly sparse unknown signal vectors from multiple measurement vectors that are different linear combinations of the same known elementary vectors. This requires careful characterization of robust complex-valued loss functions as well as of Huber's criterion function for the multivariate sparse regression problem. We devise a greedy algorithm based on the simultaneous normalized iterative hard thresholding (SNIHT) algorithm. Unlike the conventional SNIHT method, our algorithm, referred to as HUB-SNIHT, is robust under heavy-tailed non-Gaussian noise conditions, yet incurs negligible performance loss compared to SNIHT under Gaussian noise. The usefulness of the method is illustrated in a source localization application with sensor arrays.
    Comment: To appear in CoSeRa'15 (Pisa, Italy, June 16-19, 2015). arXiv admin note: text overlap with arXiv:1502.0244
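    Concretely, Huber's loss and its score function extend to complex residuals by acting on the modulus: quadratic behavior for small residuals, linear growth beyond a threshold, with a "clipped" score that preserves the phase. A minimal sketch of these two building blocks (the tuning constant c and function names are illustrative, not the paper's notation):

```python
import numpy as np

def huber_rho(r, c):
    """Huber loss on the modulus of a (possibly complex) residual:
    quadratic for |r| <= c, linear beyond."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * a ** 2, c * a - 0.5 * c ** 2)

def huber_psi(r, c):
    """Score (derivative): identity inside the threshold; outside,
    the modulus is clipped to c while the phase is preserved."""
    a = np.abs(r)
    return np.where(a <= c, r, c * r / np.maximum(a, 1e-12))
```

    Bounding the score in this way is what caps the influence of any single heavy-tailed residual on the estimate.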

    Nonparametric Simultaneous Sparse Recovery: an Application to Source Localization

    We consider the multichannel sparse recovery problem, where the objective is to recover jointly sparse unknown signal vectors from multiple measurement vectors that are different linear combinations of the same known elementary vectors. Many popular greedy or convex algorithms perform poorly under non-Gaussian heavy-tailed noise conditions or in the face of outliers. In this paper, we propose the use of mixed ℓ_{p,q} norms on the data fidelity (residual matrix) term together with the conventional ℓ_{0,2}-norm constraint on the signal matrix to promote row-sparsity. We devise a greedy pursuit algorithm based on the simultaneous normalized iterative hard thresholding (SNIHT) algorithm. Simulation studies highlight the effectiveness of the proposed approaches in coping with different noise environments (i.i.d., row-i.i.d., etc.) and outliers. The usefulness of the methods is illustrated in a source localization application with sensor arrays.
    Comment: Paper appears in Proc. European Signal Processing Conference (EUSIPCO'15), Nice, France, Aug 31 -- Sep 4, 2015.
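    The SNIHT baseline that both of the abstracts above build on alternates a normalized gradient step with a row-wise hard-thresholding projection onto K-row-sparse matrices. A simplified sketch under that reading (the step size is the usual normalized choice; using the full rather than support-restricted gradient, and the fixed iteration count, are illustrative simplifications):

```python
import numpy as np

def row_hard_threshold(B, K):
    """Keep the K rows of B with largest l2 norms (row-sparsity projection)."""
    norms = np.linalg.norm(B, axis=1)
    keep = np.argsort(norms)[-K:]
    out = np.zeros_like(B)
    out[keep] = B[keep]
    return out

def sniht(X, Y, K, n_iter=300):
    """Basic simultaneous IHT with a normalized step: a sketch of the
    non-robust SNIHT baseline, not the robust variants in the papers."""
    n, p = X.shape
    B = np.zeros((p, Y.shape[1]), dtype=Y.dtype)
    for _ in range(n_iter):
        R = Y - X @ B                     # residual matrix
        G = X.conj().T @ R                # gradient of 0.5*||Y - XB||_F^2
        # Normalized (exact line-search) step size for the gradient direction
        mu = np.linalg.norm(G) ** 2 / max(np.linalg.norm(X @ G) ** 2, 1e-12)
        B = row_hard_threshold(B + mu * G, K)
    return B
```

    The row-wise projection is what enforces joint (row) sparsity across channels, in contrast to thresholding each column independently.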

    On asymptotics of ICA estimators and their performance indices

    Independent component analysis (ICA) has become a popular multivariate analysis and signal processing technique with diverse applications. This paper discusses the theoretical large-sample properties of ICA unmixing matrix functionals. We provide a formal definition of an unmixing matrix functional and consider two popular families of estimators in detail: the family based on two scatter matrices with the independence property (e.g., the FOBI estimator) and the family of deflation-based fastICA estimators. The limiting behavior of the corresponding estimates is discussed, and the asymptotic normality of the deflation-based fastICA estimate is proven under general assumptions. Furthermore, properties of several performance indices commonly used for comparing different unmixing matrix estimates are discussed, and a new performance index is proposed. The proposed index fulfills three desirable features that promote its use in practice and distinguish it from others: it has an easy interpretation, is fast to compute, and its asymptotic properties can be inferred from the asymptotics of the unmixing matrix estimate. We illustrate the derived asymptotic results and the use of the proposed index with a small simulation study.
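    The two-scatter-matrix family mentioned above includes the FOBI estimator, which whitens the data with the covariance matrix and then eigendecomposes a fourth-moment scatter matrix; it is consistent when the sources have distinct kurtoses. A textbook sketch (not the paper's implementation; the data layout and names are assumptions):

```python
import numpy as np

def fobi(X):
    """FOBI unmixing estimate from data X of shape (n_samples, p).
    Returns W such that X @ W.T has estimated independent components,
    up to sign, order, and scale."""
    Xc = X - X.mean(axis=0)
    S1 = np.cov(Xc, rowvar=False)
    # Symmetric whitening transform from the covariance (first scatter)
    vals, vecs = np.linalg.eigh(S1)
    W1 = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = Xc @ W1.T
    # Fourth-moment scatter of the whitened data (second scatter)
    r2 = np.sum(Z ** 2, axis=1)
    S2 = (Z * r2[:, None]).T @ Z / Z.shape[0]
    # Its eigenvectors give the remaining rotation
    _, U = np.linalg.eigh(S2)
    return U.T @ W1
```

    The requirement of distinct source kurtoses is exactly the degenerate case the index discussion above must handle, since FOBI cannot separate sources whose kurtoses coincide.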

    Block-wise Minimization-Majorization algorithm for Huber's criterion: sparse learning and applications

    Huber's criterion can be used for robust joint estimation of regression and scale parameters in the linear model. Huber's (Huber, 1981) motivation for introducing the criterion stemmed from the non-convexity of the joint maximum likelihood objective function as well as the non-robustness (unbounded influence function) of the associated ML-estimate of scale. In this paper, we show how the original algorithm proposed by Huber can be set within the block-wise minimization-majorization framework. In addition, we propose novel data-adaptive step sizes for both the location and scale updates, which further improve convergence. We then show how Huber's criterion can be used for sparse learning of an underdetermined linear model using the iterative hard thresholding approach. We illustrate the usefulness of the algorithms in an image denoising application and in simulation studies.
    Comment: To appear in International Workshop on Machine Learning for Signal Processing (MLSP), 202
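    A concrete way to see the block-wise structure: for fixed scale, the regression update is a Huber-weighted least-squares (IRLS) step; for fixed regression, the scale has a simple fixed-point update derived from the criterion's first-order condition. The sketch below implements this classic alternating scheme with a Monte Carlo consistency constant (it illustrates the block structure only; the paper's data-adaptive step sizes and sparse variant are not included, and the function name is ours):

```python
import numpy as np

def huber_joint(X, y, c=1.345, n_iter=50):
    """Joint (beta, sigma) estimation by alternating block updates of
    Huber's criterion: weighted LS for beta, fixed point for sigma."""
    n, p = X.shape
    # Consistency constant so sigma estimates the noise std under Gaussian
    # noise: alpha = 0.5 * E[min(Z^2, c^2)], Z ~ N(0,1), by Monte Carlo.
    rng = np.random.default_rng(0)
    alpha = 0.5 * np.mean(np.minimum(rng.standard_normal(200_000) ** 2, c ** 2))
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # LS initialization
    sigma = np.std(y - X @ beta) + 1e-12
    for _ in range(n_iter):
        r = y - X @ beta
        # Scale block: fixed-point step from the first-order condition
        sigma = np.sqrt(np.sum(np.minimum(r ** 2, (c * sigma) ** 2)) / (2 * alpha * n))
        # Regression block: IRLS step with Huber weights at the current scale
        a = np.abs(r) / sigma
        w = np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta, sigma
```

    Calibrating alpha against the standard normal is what makes the scale estimate consistent for the noise standard deviation under Gaussian noise, while the bounded weights keep both blocks resistant to outliers.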